EyePACS LLC
AlexNet Neural Network Model, supplied by EyePACS LLC, used in various techniques. Bioz Stars score: 90/100 (2026-04), based on 1 PubMed citation. ZERO BIAS - scores, article reviews, protocol conditions and more.
https://www.bioz.com/result/alexnet neural network model/product/EyePACS LLC

MathWorks Inc
AlexNet DCNN Model, supplied by MathWorks Inc, used in various techniques. Bioz Stars score: 96/100 (2026-04), based on 1 PubMed citation.
https://www.bioz.com/result/alexnet dcnn model/product/MathWorks Inc

SoftMax Inc
AlexNet, supplied by SoftMax Inc, used in various techniques. Bioz Stars score: 90/100 (2026-04), based on 1 PubMed citation.
https://www.bioz.com/result/alexnet/product/SoftMax Inc

SoftMax Inc
Classification Model Softmax, supplied by SoftMax Inc, used in various techniques. Bioz Stars score: 90/100 (2026-04), based on 1 PubMed citation.
https://www.bioz.com/result/classification model softmax/product/SoftMax Inc

MathWorks Inc
AlexNet, supplied by MathWorks Inc, used in various techniques. Bioz Stars score: 90/100 (2026-04), based on 1 PubMed citation.
https://www.bioz.com/result/alexnet/product/MathWorks Inc

Kaggle Inc
AlexNet, supplied by Kaggle Inc, used in various techniques. Bioz Stars score: 90/100 (2026-04), based on 1 PubMed citation.
https://www.bioz.com/result/alexnet/product/Kaggle Inc

SoftMax Inc
ResNet-50+Softmax, supplied by SoftMax Inc, used in various techniques. Bioz Stars score: 90/100 (2026-04), based on 1 PubMed citation.
https://www.bioz.com/result/resnet-50+softmax/product/SoftMax Inc

Rocha labs
AlexNet, supplied by Rocha labs, used in various techniques. Bioz Stars score: 90/100 (2026-04), based on 1 PubMed citation.
https://www.bioz.com/result/alexnet/product/Rocha labs

Kaggle Inc
Penultimate Features From AlexNet, supplied by Kaggle Inc, used in various techniques. Bioz Stars score: 90/100 (2026-04), based on 1 PubMed citation.
https://www.bioz.com/result/penultimate features from alexnet/product/Kaggle Inc

Hinton labs
AlexNet, supplied by Hinton labs, used in various techniques. Bioz Stars score: 90/100 (2026-04), based on 1 PubMed citation.
https://www.bioz.com/result/alexnet/product/Hinton labs

MathWorks Inc
AlexNet Network, supplied by MathWorks Inc, used in various techniques. Bioz Stars score: 90/100 (2026-04), based on 1 PubMed citation.
https://www.bioz.com/result/alexnet network/product/MathWorks Inc
Image Search Results
Journal: bioRxiv
Article Title: Object representations drive emotion schemas across a large and diverse set of daily-life scenes
doi: 10.1101/2025.02.19.638854
Figure Legend Snippet: a) Decoding emotions from deep convolutional neural network (DCNN) representations using partial least squares regression. b) The fc8 layer of the AlexNet model outperformed the fc8 layer of the EmoNet model in predicting emotion ratings. Each line and dot represents the result of a cross-validated fold. c) AlexNet consistently outperformed EmoNet in predicting emotion ratings across three subsets of BOLD5000 images. d) EmoNet outperformed AlexNet in predicting emotion ratings for the Cowen17 dataset; in contrast, for the BOLD5000 dataset, AlexNet outperformed EmoNet. Error bars represent the standard error across cross-validated folds.
Article Snippet: The
Techniques:
Journal: bioRxiv
Article Title: Object representations drive emotion schemas across a large and diverse set of daily-life scenes
doi: 10.1101/2025.02.19.638854
Figure Legend Snippet: a) Results of decoding emotions from AlexNet representations. These results show that emotional information is processed hierarchically in a visual object processing system. b) Differences between AlexNet layers in predicting emotion ratings. c) The hierarchical processing of emotional information was consistent across the three subsets of BOLD5000 images. d) The hierarchical processing of emotional information was consistent regardless of whether the BOLD5000 or Cowen17 dataset was used. Error bars represent the standard error across cross-validated folds, and each dot represents the result of a cross-validated fold.
Article Snippet: The
Techniques:
Journal: bioRxiv
Article Title: Object representations drive emotion schemas across a large and diverse set of daily-life scenes
doi: 10.1101/2025.02.19.638854
Figure Legend Snippet: The representational similarity analysis results indicated that both a) the conv1 layer and b) the fc8 layer of AlexNet exhibited greater within-cluster similarity than between-cluster similarity, suggesting that both layers encode emotional information, regardless of the number of K-means clusters. c) The fc8 layer demonstrated a larger difference in pattern similarity between within-cluster and between-cluster comparisons than the conv1 layer, indicating that the fc8 layer encodes more emotional information. Error bars represent the standard error across clusters, and each dot represents the result of a cluster.
Article Snippet: The
Techniques:
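The within- versus between-cluster pattern-similarity comparison described in the legend can be sketched as follows, using synthetic activations in place of AlexNet conv1/fc8 features; the cluster count, group structure, and dimensionalities are illustrative assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)

# Stand-in layer activations: 3 latent "emotion schema" groups of images.
# In the study these would be conv1 or fc8 activations of AlexNet.
centers = rng.standard_normal((3, 50)) * 3
acts = np.vstack([c + rng.standard_normal((40, 50)) for c in centers])

# Cluster images by their activation patterns.
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(acts)

# Pattern similarity = Pearson correlation between image activation vectors.
sim = np.corrcoef(acts)
iu = np.triu_indices_from(sim, k=1)          # each image pair once
same = labels[iu[0]] == labels[iu[1]]
within = sim[iu][same].mean()
between = sim[iu][~same].mean()
print(round(within - between, 3))  # a positive gap means the layer's clusters carry information
```

The quantity plotted in panel (c) corresponds to this within-minus-between gap, computed separately per layer and per choice of K.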
Journal: Computational and Mathematical Methods in Medicine
Article Title: Early Diagnosis of Brain Tumour MRI Images Using Hybrid Techniques between Deep and Machine Learning
doi: 10.1155/2022/8330833
Figure Legend Snippet: AlexNet architecture.
Article Snippet: All images are also diagnosed using deep learning techniques for two models, namely,
Techniques:
Journal: Computational and Mathematical Methods in Medicine
Article Title: Early Diagnosis of Brain Tumour MRI Images Using Hybrid Techniques between Deep and Machine Learning
doi: 10.1155/2022/8330833
Figure Legend Snippet: Hybrid architecture between deep and machine learning: (a) AlexNet+SVM; (b) ResNet-18+SVM.
Article Snippet: All images are also diagnosed using deep learning techniques for two models, namely,
Techniques:
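The hybrid design in this figure — a CNN used as a feature extractor, with an SVM replacing the softmax classification head — can be sketched as follows. The feature vectors here are synthetic stand-ins for penultimate-layer AlexNet features of MRI slices; in practice they would be extracted from a pretrained network, and the class count and SVM hyperparameters are illustrative assumptions.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix, accuracy_score

rng = np.random.default_rng(2)

# Stand-ins for CNN penultimate-layer features across four tumour classes;
# real features would come from a pretrained AlexNet or ResNet-18.
n_per_class, n_classes, dim = 60, 4, 256
means = rng.standard_normal((n_classes, dim)) * 2
X = np.vstack([m + rng.standard_normal((n_per_class, dim)) for m in means])
y = np.repeat(np.arange(n_classes), n_per_class)

Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.25,
                                      stratify=y, random_state=0)

# The hybrid step: train an SVM on the deep features instead of a softmax head.
clf = SVC(kernel="rbf", C=10.0).fit(Xtr, ytr)
pred = clf.predict(Xte)
print(confusion_matrix(yte, pred))
print(round(accuracy_score(yte, pred), 3))
```

The confusion matrix printed here is the kind of per-class summary shown in the article's evaluation figures.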
Journal: Computational and Mathematical Methods in Medicine
Article Title: Early Diagnosis of Brain Tumour MRI Images Using Hybrid Techniques between Deep and Machine Learning
doi: 10.1155/2022/8330833
Figure Legend Snippet: Adjusted training parameters of the ResNet-18 and AlexNet models.
Article Snippet: All images are also diagnosed using deep learning techniques for two models, namely,
Techniques: Biomarker Discovery
Journal: Computational and Mathematical Methods in Medicine
Article Title: Early Diagnosis of Brain Tumour MRI Images Using Hybrid Techniques between Deep and Machine Learning
doi: 10.1155/2022/8330833
Figure Legend Snippet: (a) Confusion matrix for AlexNet to evaluate MRI brain tumours. (b) Confusion matrix for ResNet-18 to evaluate MRI brain tumours.
Article Snippet: All images are also diagnosed using deep learning techniques for two models, namely,
Techniques:
Journal: Computational and Mathematical Methods in Medicine
Article Title: Early Diagnosis of Brain Tumour MRI Images Using Hybrid Techniques between Deep and Machine Learning
doi: 10.1155/2022/8330833
Figure Legend Snippet: (a) Confusion matrix for AlexNet+SVM to evaluate MRI brain tumours. (b) Confusion matrix for ResNet-18+SVM to evaluate MRI brain tumours.
Article Snippet: All images are also diagnosed using deep learning techniques for two models, namely,
Techniques:
Journal: Computational and Mathematical Methods in Medicine
Article Title: Early Diagnosis of Brain Tumour MRI Images Using Hybrid Techniques between Deep and Machine Learning
doi: 10.1155/2022/8330833
Figure Legend Snippet: Results of diagnosing brain tumours using deep learning models and hybrid deep and machine learning techniques.
Article Snippet: All images are also diagnosed using deep learning techniques for two models, namely,
Techniques:
Journal: Computational and Mathematical Methods in Medicine
Article Title: Early Diagnosis of Brain Tumour MRI Images Using Hybrid Techniques between Deep and Machine Learning
doi: 10.1155/2022/8330833
Figure Legend Snippet: Diagnostic accuracy of the four models for diagnosing each tumour class.
Article Snippet: All images are also diagnosed using deep learning techniques for two models, namely,
Techniques: Diagnostic Assay
Journal: Applied Soft Computing
Article Title: The ensemble deep learning model for novel COVID-19 on CT images
doi: 10.1016/j.asoc.2020.106885
Figure Legend Snippet: AlexNet network structure diagram.
Article Snippet: In first experiment,
Techniques:
Journal: Applied Soft Computing
Article Title: The ensemble deep learning model for novel COVID-19 on CT images
doi: 10.1016/j.asoc.2020.106885
Figure Legend Snippet: AlexNet_Softmax classification results.
Article Snippet: In first experiment,
Techniques:
Journal: Applied Soft Computing
Article Title: The ensemble deep learning model for novel COVID-19 on CT images
doi: 10.1016/j.asoc.2020.106885
Figure Legend Snippet: AlexNet_Softmax classification evaluation index.
Article Snippet: In first experiment,
Techniques:
Journal: PLoS Computational Biology
Article Title: Using deep neural networks to evaluate object vision tasks in rats
doi: 10.1371/journal.pcbi.1008714
Figure Legend Snippet: (a, b) The full sets of size and azimuth-rotation combinations of the two objects used in the behavioral task. For copyright reasons we do not show the stimuli from the original paper, but show images of similar objects created in Blender 2.91 for illustrative purposes only. For the model we did use the originals from . Rats were first trained on a subset of 14 of these transformations (purple) and subsequently asked to generalize to the remaining 40 novel combinations (green). (c, d) Average percentage correct discrimination of object 1 and object 2 by models incorporating increasingly more DNN layers of AlexNet (c) and VGG16 (d) (the X-axis indicates the highest DNN layer). Each model's decoder layer was first trained on the same object transformations as the rats, after which performance was evaluated on these trained transformations (purple) as well as the untrained transformations (green). Horizontal lines indicate the average behavioral performance across rats reported by Zoccolan et al. . Black and grey bars on the X-axis indicate layer blocks and markers indicate layer types (see legend insert); the division between convolutional and fully connected layer blocks is indicated by a dashed line. (e, f) Percentage variance in performance across stimuli explained by each DNN layer (estimated by a linear mapping fit on the data and stimuli of the training set only) from AlexNet (e) and VGG16 (f). Same conventions as in (c, d). All error bounds are 95% confidence intervals calculated using Jackknife standard error estimates (resampling size and azimuth-rotation combinations).
Article Snippet:
Techniques:
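The layer-wise analysis in panels (c, d) — fitting a decoder on top of progressively deeper network layers and comparing held-out accuracy — can be illustrated with a toy sketch. The "layers" below are synthetic features whose class signal grows with depth; they are a stand-in for actual AlexNet/VGG16 activations, and the depths and sizes are arbitrary.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(5)

# Toy "layer activations": deeper layers carry a stronger linear class
# signal, mimicking features becoming more task-relevant with depth.
def layer_features(depth, X0, y):
    signal = (y[:, None] * 2 - 1) * depth * 0.5   # class signal in dim 0
    pad = np.zeros((len(y), X0.shape[1] - 1))
    return X0 + np.hstack([signal, pad])

y = rng.integers(0, 2, 300)                        # two "objects"
X0 = rng.standard_normal((300, 20))                # shared noise backbone
train = np.arange(150)
test = np.arange(150, 300)

accs = []
for depth in range(1, 6):
    X = layer_features(depth, X0, y)
    clf = LogisticRegression(max_iter=1000).fit(X[train], y[train])
    accs.append(clf.score(X[test], y[test]))
print([round(a, 2) for a in accs])
```

In the paper, the X-axis of panels (c, d) plays the role of `depth`: each point is a separate decoder trained on one layer's activations and evaluated on trained and untrained transformations.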
Journal: PLoS Computational Biology
Article Title: Using deep neural networks to evaluate object vision tasks in rats
doi: 10.1371/journal.pcbi.1008714
Figure Legend Snippet: (a, b) Single frames of all the training (a) and test (b) videos that rats were asked to classify in the behavioral task. Each rat was trained with a subset of 15 videos (purple), and tested for generalization with 40 novel videos (green). Ten test videos (the 5 pairs with natural distractor in the left-most green rectangle) were modified to further probe the rats, for example by reducing playback speed to 25% or equalizing average pixel values in the lower half of the videos [see ]. (c, d) Average percentage correct classification of rat versus non-rat frame bin pairs by models incorporating increasingly more DNN layers from AlexNet, VGG16, and VGG11-C3D (the X-axis indicates the highest DNN layer), for natural (c) and scrambled (d) distractors separately. Performance is evaluated on the training set (purple; all 50 target-distractor combinations) as well as the test set (green; the 25 tested target-distractor pairs, with 25% playback speed and pixel value modifications for 5 pairs [see (b)]). Black and grey bars on the X-axis indicate layer blocks and markers indicate layer types (see legend insert); the division between convolutional and fully connected layer blocks is indicated by a dashed line. Horizontal lines indicate the average behavioral performance across rats. (e, f) Percentage variance in performance across target-distractor pairs explained by each DNN layer (estimated by a linear mapping fit on the training set data and stimuli only) from AlexNet, VGG16, and VGG11-C3D (left to right), for natural (e) and scrambled (f) distractors. Same conventions as in (c, d). All error bounds are 95% confidence intervals calculated using Jackknife standard error estimates (resampling target-distractor pairs).
Article Snippet:
Techniques: Modification
Journal: PLoS Computational Biology
Article Title: Using deep neural networks to evaluate object vision tasks in rats
doi: 10.1371/journal.pcbi.1008714
Figure Legend Snippet: (a) Top: the reference object (purple) and 11 distractor objects (rest) that rats were trained to discriminate in the behavioral task. For copyright reasons we do not show the stimuli from the original paper, but similar silhouette images were redrawn by hand or adapted from close-matching clip art ( https://openclipart.org/ ) for illustrative purposes only. For the model we did use the originals from . The 3 example distractors of Fig 1 in Djurdjevic et al. are highlighted in yellow, green, and light blue. Bottom: in a later phase of the experiment, the rats were trained to tolerate size changes in all objects from 15° to 35° of visual angle (here only shown for the subset of 4 objects indicated in color on the top). (b, c) Percentage of variance in average discrimination performance of good-performing (blue) and poorer-performing (red) rats, explained by each DNN layer (estimated by a linear mapping with leave-one-out cross-validation) from AlexNet (b) and VGG16 (c). The linear mapping was estimated using the data available in Figs 1 and 2 of Djurdjevic et al. : all sizes of the 4 example objects indicated by purple, yellow, green, and light blue in (a), and the 30° size for the remaining eight objects. Black and grey bars on the X-axis indicate layer blocks and markers indicate layer types (see legend insert); the division between convolutional and fully connected layer blocks is indicated by a dashed line. Error bounds are 95% confidence intervals calculated using Jackknife standard error estimates (resampling stimuli). (d) Top: average discrimination performances of good-performing rats, as a function of object size, for the reference object and the 3 example distractors. Middle: average predicted discrimination performances of good-performing rats, based on pool2 of AlexNet, which explained the most out-of-sample variance in behavioral performance. Bottom: same as middle, but for conv1 of AlexNet.
( e ) Same as (d), but for poorer performing rats.
Article Snippet:
Techniques: Biomarker Discovery
Journal: PLoS Computational Biology
Article Title: Using deep neural networks to evaluate object vision tasks in rats
doi: 10.1371/journal.pcbi.1008714
Figure Legend Snippet: (a-c) Neural RDMs for V1, LI, and TO data. Rows and columns correspond to 16-frame bins: nine per video, with first the five rat videos, then the five non-rat videos, and finally the scrambled versions of the natural videos in the same order. The color scale corresponds to percentiles of each RDM's Pearson correlation distances (excluding the diagonal values). (d-f) Artificial neural network RDMs (Pearson correlation distance) for 4 layers of AlexNet, VGG16, and VGG11-C3D: the first layer, the earliest layer for which the RDM corresponds better to extra-striate (LI/TO) data, the layer with the best normalized correlation with neural data (TO in all three cases), and the last layer (fc8). (g-i) Spearman correlations between each artificial neural network layer RDM and each neural RDM (calculated using above-diagonal elements only), normalized by each area's noise ceiling (V1 in blue, LI in yellow, TO in red). Black and grey bars on the X-axis indicate layer blocks and markers indicate layer types (see legend insert); the division between convolutional and fully connected layer blocks is indicated by a dashed line. Grey text labels indicate the DNN RDMs shown in (d-f). Error bounds are 95% confidence intervals calculated using Jackknife standard error estimates (resampling neural units). (j-l) Two-dimensional representation of similarities between neural and artificial neural network RDMs, derived from applying non-metric multidimensional scaling to Spearman correlation distances between RDMs. Each marker corresponds to an RDM, and similar RDMs are plotted closer together. Text labels indicate the neural RDMs and the DNN RDMs shown in (d-f).
Article Snippet:
Techniques: Derivative Assay, Marker
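The RDM comparison in panels (g-i) — building representational dissimilarity matrices as 1 minus the Pearson correlation between condition patterns, then correlating their above-diagonal elements with Spearman's rho — can be sketched on synthetic data. The matrix sizes and noise level below are illustrative; real inputs would be a DNN layer's activations and a neural population's responses to the same frame bins.

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(3)

def rdm(patterns):
    """Representational dissimilarity matrix: 1 - Pearson correlation
    between the response patterns of every pair of conditions."""
    return 1.0 - np.corrcoef(patterns)

# Stand-ins for condition x unit response matrices (e.g. a DNN layer and
# a neural population responding to the same 27 frame bins). The "neural"
# data shares part of the layer's representational geometry, plus noise.
layer = rng.standard_normal((27, 100))
neural = layer[:, :60] + 0.5 * rng.standard_normal((27, 60))

rdm_layer, rdm_neural = rdm(layer), rdm(neural)

# Compare RDMs using above-diagonal elements only, as in the figure.
iu = np.triu_indices(27, k=1)
rho, _ = spearmanr(rdm_layer[iu], rdm_neural[iu])
print(round(rho, 3))
```

In the paper this rho is additionally divided by each area's noise ceiling before plotting, so a value of 1 means the layer matches the neural data as well as the data's reliability allows.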
Journal: PLoS Computational Biology
Article Title: Using deep neural networks to evaluate object vision tasks in rats
doi: 10.1371/journal.pcbi.1008714
Figure Legend Snippet: Nomenclature used to refer to each architecture's layers and the division across layer blocks.
Article Snippet:
Techniques: Blocking Assay
Journal: Scientific Reports
Article Title: Bayesian optimized multimodal deep hybrid learning approach for tomato leaf disease classification
doi: 10.1038/s41598-024-72237-x
Figure Legend Snippet: Review of existing leaf disease methodologies with limitations.
Article Snippet: Da
Techniques: Extraction, Modification
Journal: Scientific Reports
Article Title: Bayesian optimized multimodal deep hybrid learning approach for tomato leaf disease classification
doi: 10.1038/s41598-024-72237-x
Figure Legend Snippet: Comparison of the proposed approach with the latest approaches (tomato leaf, 10 classes).
Article Snippet: Da
Techniques: Comparison, Modification
Journal: Scientific Reports
Article Title: Bayesian optimized multimodal deep hybrid learning approach for tomato leaf disease classification
doi: 10.1038/s41598-024-72237-x
Figure Legend Snippet: Comparison of the suggested approach with recently established models for various crops.
Article Snippet: Da
Techniques: Comparison
Journal: Scientific Reports
Article Title: Bayesian optimized multimodal deep hybrid learning approach for tomato leaf disease classification
doi: 10.1038/s41598-024-72237-x
Figure Legend Snippet: Comparison of the proposed model's training parameters with state-of-the-art models.
Article Snippet: Da
Techniques: Comparison
Journal: Frontiers in Big Data
Article Title: MARGIN: Uncovering Deep Neural Networks Using Graph Signal Analysis
doi: 10.3389/fdata.2021.589417
Figure Legend Snippet: Our approach identifies the most salient regions in different classes for image classification using AlexNet. From top to bottom: original image, MARGIN's explanation overlaid on the image, and Grad-CAM's explanation. Note that our approach yields highly specific and sparse explanations from different regions in the image for a given class.
Article Snippet: For
Techniques:
Journal: Nature Communications
Article Title: Digit-tracking as a new tactile interface for visual perception analysis
doi: 10.1038/s41467-019-13285-0
Figure Legend Snippet: Convolutional neural network computation of saliency maps. A model of artificial vision was used to predict the most salient areas in an image and to test whether attention maps derived from digit-tracking and eye-tracking exploration data are sensitive to the same features in visual scenes. a Convolutional neural network architecture. The first five convolutional layers of the AlexNet network were used for feature map extraction, and features are linearly combined in the last layer to produce saliency maps (e.g., Fig. ). b Hierarchical ordering of learned weights in the last layer of the convolutional neural network (CNN). The X-axis denotes the 256 outputs, while the Y-axis denotes the mean Pearson correlation between an individual channel and the measured saliency map. Each channel can be seen as a saliency map sensitive to a single feature class in the picture. A strong positive correlation coefficient indicates a highly attractive feature, while a strong negative correlation indicates a highly avoided feature. c Correlation between the weights learned using eye- and digit-tracking (Set A: r_Pearson = 0.95, p < 1 × 10^−128; Set B: r_Pearson = 0.96, p < 1 × 10^−147). d High-level features are visualized by identifying in the picture database the most responsive pixels for the considered CNN channel. Examples of the most attractive and the most avoided features, corresponding respectively to the 3 most positively correlated and the 3 most negatively correlated channels of the CNN. Human explorations are particularly sensitive to eyes or eye-like areas, faces, and highly contrasted details, while uniform areas with natural colors, textures, and repetitive symbols are generally avoided.
Article Snippet: To this end, the
Techniques: Derivative Assay, Extraction
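The saliency computation in panels (a, b) — linearly combining convolutional feature maps into a saliency map, then ranking channels by their correlation with a measured attention map — can be sketched as follows. Everything here (feature maps, weights, and the "measured" map) is a synthetic stand-in; in the study the maps come from AlexNet's convolutional layers and the weights are fit to digit-/eye-tracking data.

```python
import numpy as np

rng = np.random.default_rng(4)

# Stand-ins for feature maps from a convolutional layer: 256 channels
# of a 13x13 map, matching the channel count cited in panel (b).
n_ch, h, w = 256, 13, 13
feats = rng.standard_normal((n_ch, h, w))

# Learned per-channel weights (random here); the weighted sum of channel
# maps is the predicted saliency map of panel (a).
weights = rng.standard_normal(n_ch)
saliency = np.tensordot(weights, feats, axes=1)    # (13, 13) map

# Rank channels by correlation with a measured saliency map, as in (b).
measured = saliency + rng.standard_normal((h, w))  # noisy target for demo
corrs = np.array([np.corrcoef(feats[c].ravel(), measured.ravel())[0, 1]
                  for c in range(n_ch)])
order = np.argsort(corrs)[::-1]  # most attractive -> most avoided channels
print(order[:3], order[-3:])
```

The first and last entries of `order` correspond to the "most attractive" and "most avoided" feature channels visualized in panel (d).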